sea's blog → Algebra, Lisp, and miscellaneous thoughts

TF World - Central OpenTofu

I have one central OpenTofu repo that sets up permissions for all the other project repositories to run their pipelines authenticated against AWS via OIDC, which then lets them spin up their own resources.

The central management state also sets up any shared resources: domains, networking, state buckets, TF state lock tables, etc.

I rarely have to edit this code, but it is the core of all of my infrastructure.

Introduction

This here is my tf-world OpenTofu configuration. I split it across many files for readability, as you do, but they are all tangled from this one central file.

This module in particular only sets up the central resources (networking, database, TF state buckets, and so on) and authorizes other pipelines to set up their own resources separately.

Any resources created in here are either shared, or legacy things that haven't yet been migrated to their own projects.

Providers

terraform {
	required_providers {
		postgresql = {
			source = "cyrilgdn/postgresql"
			version = "~> 1.26"
		}
		random = {
			source = "hashicorp/random"
			version = "~> 3.6"
		}
		aws = {
			source = "hashicorp/aws"
		}		
		gitlab = {
			source = "gitlabhq/gitlab"
		}
	}
}

AWS Profile Stuff

Since I want to be able to run OpenTofu locally in addition to in the pipelines, I want to specify a particular profile.

Probably this should have been done via environment variable and a helper script, but doing it this way ensures that I can work on multiple projects across different orgs and still have all of my pipeline code be identical to what runs locally.
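For reference, the environment-variable version of the same idea (a sketch of the alternative, not what this repo does) would just drop the profile from the provider block and let the AWS SDK pick it up from the environment:

# Sketch of the alternative: no profile in the provider block.
# Run with e.g.: AWS_PROFILE=tf-frostzero tofu plan
provider "aws" {
	region = "ca-central-1"
}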

provider "aws" {
	region = "ca-central-1"
	profile = "tf-frostzero"	
}

I used to generate some certs in us-east-1 for CloudFront to use (AWS has a hard requirement that those certs live there), so I set up an aliased AWS provider with that region set.

# This is for generation of certs in us-east-1 for use with cloudfront:
provider "aws" {
	region = "us-east-1"
	profile = "tf-frostzero"
	alias = "us-east-1"
}

Bootstrap Administration

In order to mess around in AWS, I first create a day-to-day administration user. It's not great practice to log in as the account root user, except for some money-related things in there.

This sets up that admin user account and gives them full access.

# ========================================
# DAY TO DAY ADMIN USER
# ========================================
resource "aws_iam_user" "admin" {
	name = "frostzero-admin"
	path = "/"

	tags = {
		purpose = "Administrative access for daily use"
		managed_by = "opentofu"
	}
}

resource "aws_iam_user_policy_attachment" "admin_full_access" {
	user = aws_iam_user.admin.name
	policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

resource "aws_iam_user_login_profile" "admin_profile" {
	user = aws_iam_user.admin.name
}

# For getting my AWS account ID (you need it to log in)
data "aws_caller_identity" "current" {}

output "admin_user" {
	sensitive = true
	value = {
		initial_password = aws_iam_user_login_profile.admin_profile.password
		arn = aws_iam_user.admin.arn
		name = aws_iam_user.admin.name
		aws_account_id = data.aws_caller_identity.current.account_id
	}
}

Overall Budget

I configure a catch-all overall budget for everything in AWS, with no cost filters.

Individual projects can have their own specialized budgets. This one is just an emergency alarm trigger in case I have a runaway lambda or something.

module "catch-all-budget" {
	source = "git::https://gitlab.com/sea/public/tf-modules/aws/budget.git"
	name = "General Catch-All"
	monthly-limit = 150
	currency = "CAD"
	notification-email-addresses = [local.personal-email-addr]	
	cost-filters = {}
}
output "budgets" {
	value = {
		"Catch All" = module.catch-all-budget.o
	}
}

Central Postgresql Database

I use an Aurora RDS database centrally, though I try not to actually hit it often; it costs nothing when scaled to zero.

For most services I try to use DynamoDB. Only big things that absolutely require PostgreSQL still talk to it. (Or old code.)

Since databases are sometimes a bitch to set up in OpenTofu, I just did it manually and use a data source to point at it.

# This is a bitch to set up with iac, so I did it manually and just refer to it:
data "aws_db_instance" "psql" {
	db_instance_identifier = "frostzero-db-1-instance-1"
}

The actual database master password is in the TF state, unfortunately. There is some clever trickery to avoid that: put it into AWS Secrets Manager and fetch it back out. That is a todo for later.
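A rough sketch of what that todo would look like (hypothetical resource names; note that reading the secret back through a data source still records the value in the state, so the win is centralized access and rotation, not keeping it out of the state entirely):

# Hypothetical sketch, not yet applied in this repo:
resource "aws_secretsmanager_secret" "psql-master" {
	name = "frostzero/psql-master-password"
}

resource "aws_secretsmanager_secret_version" "psql-master" {
	secret_id = aws_secretsmanager_secret.psql-master.id
	secret_string = random_password.psql-master-password.result
}

# Anything else can then fetch it back out:
data "aws_secretsmanager_secret_version" "psql-master" {
	secret_id = aws_secretsmanager_secret.psql-master.id
	depends_on = [aws_secretsmanager_secret_version.psql-master]
}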

Master Password

  • TODO Move postgresql master password into Secrets Manager and pull it back out here.
    resource "random_password" "psql-master-password" {
    	# ToDo: This imports my master DB password into the Terraform state,
    	# which actually quite a lot of things have permission to read...
    	# But then, I can see no elegant way to provide the password to Terraform that is not trivially interceptible by anyone able to trigger a run anyway.
    	lifecycle {
    		ignore_changes = [special,length] # Since I import it into the state
    	}
    	special = true
    	length = 16
    }
    
    output "psql-master-password" {
    	sensitive = true
    	value = random_password.psql-master-password.result
    }
    

Postgresql Provider

In order to actually connect to the database and manage roles and databases and so forth, I need this provider:

provider "postgresql" {
	scheme = "awspostgres"
	host = data.aws_db_instance.psql.address
	port = data.aws_db_instance.psql.port
	database = data.aws_db_instance.psql.db_name # Connect to existing DB.
	username = data.aws_db_instance.psql.master_username
	password = random_password.psql-master-password.result
	sslmode = "require"
	superuser = false
}

DB Roles and Databases

resource "postgresql_role" "md5minmax" {
	name = "md5minmax"
	login = true
	password = random_password.md5minmax.result
}

resource "postgresql_database" "md5minmax" {
	name = "md5minmax"
	owner = postgresql_role.md5minmax.name
}

resource "random_password" "md5minmax" {
	length = 16
	special = false
}

Domains - DNS

I've registered my domain in Route 53; this file just uses some modules I've got to refer to that domain and set up any specific records I need.

The domain object here is referred to via remote state by almost anything that needs to set up its own domain. Mostly they just pull the zone-id and domain name, though.
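From a consuming project, that looks roughly like this (a sketch; it assumes the gathered output object exposes zone-id, and uses the backend settings from this repo):

# In some other project's configuration:
data "terraform_remote_state" "tf-world" {
	backend = "s3"
	config = {
		bucket = "tf-state.frostzero.ca"
		key = "z0.tf.state"
		region = "ca-central-1"
	}
}

locals {
	# The output names contain hyphens, so index syntax is needed:
	frostzero-zone-id = data.terraform_remote_state.tf-world.outputs["domain-frostzero-ca"]["zone-id"]
}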

The domain module here also automatically sets up e-mail-related records for my personal email provider, and a few other bookkeeping things.

The delegated-zones part allows me to have my own subdomains managed somewhere else. For those I run my own BIND9 DNS server and point at them. The binding is configured here and is dualstack.

# It makes sense to set up a domain_descriptors local array and feed the variables in from a for_each..
# but when there's only one, why overcomplicate things?
module "domain_frostzero_ca" {
	source = "git::https://gitlab.com/sea/public/tf-modules/aws/route53-domain.git"
	import-domain = false # This is only needed initially when the domain is first set up.
	domain-name = "frostzero.ca" 
	enable-fastmail = true # Set up MX records related to fastmail.

	# Subdomains and the nameservers that are authoritative for them:	
	delegated-zones = {
		"sea" = {
			# ToDo: Instead of hardcoding these, import the other VPS provider into terraform and reference the server's properties	
			ipv4-1 = "172.105.7.205"
			ipv4-2 = "172.105.23.146"
			ipv6-1 = "2600:3c04::f03c:94ff:fe03:adfa"
			ipv6-2 = "2600:3c04::f03c:94ff:fe03:ad1a"
		},
	}
	# For cost tracking purposes:
	tags = {
		project = "frostzero-domain"
	}
}

# The module exports an 'o' gathered output and I can just pull it out:
output "domain-frostzero-ca" { value = module.domain_frostzero_ca.o }
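If a second domain ever shows up, the for_each shape mentioned in the comment above would look something like this (a sketch; the domain_descriptors local is hypothetical):

# Hypothetical for_each variant of the module call above:
locals {
	domain_descriptors = {
		"frostzero.ca" = { enable-fastmail = true, delegated-zones = {} }
	}
}

module "domains" {
	for_each = local.domain_descriptors
	source = "git::https://gitlab.com/sea/public/tf-modules/aws/route53-domain.git"
	import-domain = false
	domain-name = each.key
	enable-fastmail = each.value.enable-fastmail
	delegated-zones = each.value.delegated-zones
	tags = { project = "frostzero-domain" }
}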

These are just a few special DNS records that I bother to set up here centrally.

Actually, I can't even remember if these are still in use. I should get rid of them and see what breaks.

resource "aws_route53_record" "frost-dns-1" {
	zone_id = module.domain_frostzero_ca.zone-id
	name = "frost-dns-1"
	ttl = 300
	type = "AAAA"
	records = ["2600:3c04::2000:b7ff:fe68:d272"]
}
resource "aws_route53_record" "frost-dns-2" {
	zone_id = module.domain_frostzero_ca.zone-id
	name = "frost-dns-2"
	ttl = 300
	type = "AAAA"
	records = ["2600:3c04::2000:61ff:fe6e:f991"]
}

output "domains" {
	value = {
		"frostzero.ca" = module.domain_frostzero_ca
	}
}

Opentofu related - DynamoDB for state locks

This file configures a DynamoDB table for Terraform state locks.

# ========================================
# FOR TF STATE LOCKS
# ========================================

resource "aws_dynamodb_table" "tf-state-locks" {
	name = "tf-state-locks"
	billing_mode = "PAY_PER_REQUEST"

	hash_key = "LockID"
	attribute {
		name = "LockID"
		type = "S"
	}
	deletion_protection_enabled = true
	point_in_time_recovery { enabled = false }
	tags = {
		project = "tf-state"
	}
}

ECS - Running containers in fargate without caring about the VMs

I have a few simple projects that build containers. I can't be bothered to manage EC2 instances for them, so I have a central ECS cluster here, and the tasks are deployed on it in the code just below.

This central ECS cluster is another thing that might be referred to by other projects via the remote state.

module "central-ecs-cluster" {
	source = "git::https://gitlab.com/sea/public/tf-modules/aws/ecs-cluster.git"
	name = "frostzero"
	# ToDo: Set these up in module.
	# tags = {}
}

ECS Task - MD5 Extremes Search

This initially started off as a bs task, hence the name. I can't be bothered to move everything in the state to fix it.

In the future I would like to migrate these to lambdas, though I'm aware of the 15-minute run limit. Maybe AWS Batch is a better match.

# ==========
# Container tasks:
# ==========

module "bs-task-1" {
	source = "git::https://gitlab.com/sea/public/tf-modules/aws/ecs-container-task.git"
	name = "bs-1"
	image-path = "registry.gitlab.com/sea/public/md5-extremes-search:latest"
	desired-replica-count = 2
	cpu = 256 # 0.25 vCPU, minimum valid amount
	environment-json = <<-EOF
[
  {"name": "PSQL_CONNECTION_STRING", "value": "user=md5minmax host=${data.aws_db_instance.psql.address} password=${random_password.md5minmax.result} dbname=md5minmax sslmode=require"}
]
EOF
	memory = 512 # 0.5 GB, minimum valid amount
	subnets = module.dualstack-vpc-a.ipv6only-subnets-as-list
	assign-public-ip = false # How does this interact with the subnet being public ipv6-only? Hm.
	ecs-cluster = module.central-ecs-cluster.arn
	security-groups = [module.dualstack-vpc-a.default-security-group]
}

GitLab Pipeline Authorization

This allows GitLab pipelines to mess around with resources in AWS, and hence allows pipelines to deploy things on my behalf.

# Establish linkages between gitlab projects and AWS:

# Only one of these with a given url can exist, so pulled out of the module:
resource "aws_iam_openid_connect_provider" "gitlab" {
	url = "https://gitlab.com"
	client_id_list = [
		"https://gitlab.com" # Audience and claim that gitlab includes in the JWT
	]
	# The thumbprint list is optional for gitlab.com in recent versions of the provider;
	# AWS already trusts it via the root CA store.
}

For each project that needs to talk to AWS, I have one module instantiation.

I should really make a 'project' module and have that do all this for me, as well as set up database tables and such.

Project - Personal Blog

module "oidc-blog" {
	source = "./oidc-connection"
	name = "blog"
	branch_ref = "sea/public/blog:ref_type:branch:ref:main"
	state_key = "blog.tf.state"
	openid_connect_provider_arn = aws_iam_openid_connect_provider.gitlab.arn
	extra_permissions = [{
		Effect = "Allow"
		Action = ["*"]
		# Also includes dev blog, and any objects within either.
		Resource = [
			"arn:aws:s3:::*www.frostzero.ca",
			"arn:aws:s3:::*www.frostzero.ca/*"
		]},
		# Certificates (screw it, just do whatever, I'll tighten it later)
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:acm:*"]
		},
		# Cloudfront: (ToDo: Tighten this)
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:cloudfront:*"]
		},
		# Route 53: (ToDo: Tighten this)
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:route53:::hostedzone/*", "arn:aws:route53:::change/*"]
		}
	]
}

Project - Personal Nextcloud instance

# Personal nextcloud instance - bindings from AWS to GitLab 
module "oidc-nextcloud" {
	source = "./oidc-connection"
	name = "nextcloud"
	branch_ref = "sea/public/infrastructure/nextcloud:ref_type:branch:ref:main"
	state_key = "nextcloud.tf.state"
	openid_connect_provider_arn = aws_iam_openid_connect_provider.gitlab.arn
	extra_permissions = [
		# EC2 (ToDo: Tighten this up)
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:ec2:*"]
		},
		# Route 53: (ToDo: Tighten this.)
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:route53:::hostedzone/*", "arn:aws:route53:::change/*"]
		}
	]
}

Project - Blood Tablets

# AKA blood tablets
module "oidc-vampire-tablets" {
	source = "./oidc-connection"
	name = "vampire-tablets"
	branch_ref = "sea/public/infrastructure/blood-tablets:ref_type:branch:ref:main"
	state_key = "vampire-tablets.tf.state"
	openid_connect_provider_arn = aws_iam_openid_connect_provider.gitlab.arn
	extra_permissions = [
		# Lambda - for setting up the lambda functions
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:lambda:*"]
		},
		# DynamoDB - for creating the tables
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:dynamodb:*"]
		},
		# IAM stuff - for creating roles for lambdas
		{
			Effect = "Allow"
			Action = ["*"]
			Resource = ["arn:aws:iam::*"]
		}
	]
}

OIDC outputs

Here I just spit out the outputs for all of the OIDC stuff that's configured. Oddly enough, I've never had to reference this. Should I even have these things that I don't use?

output "oidc" {
	value = {
		"blog" = module.oidc-blog.o
		"nextcloud" = module.oidc-nextcloud.o
		"vampire-tablets" = module.oidc-vampire-tablets.o
	}
}

Storage - Special Buckets

This file creates special S3 buckets for various purposes like the TF state, legacy storage for my old resumes, cold backups, and a legacy nextcloud instance's storage.

TF State Bucket

All of my tf state goes in here for every project, keyed by project name.
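Each project then points its backend at this bucket under its own key. A consuming project's backend block looks roughly like this (a sketch; the key matches the state_key granted to the blog pipeline elsewhere in this config):

terraform {
	backend "s3" {
		bucket = "tf-state.frostzero.ca"
		key = "blog.tf.state"
		region = "ca-central-1"
		encrypt = true
		dynamodb_table = "tf-state-locks"
	}
}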

# ========================================
# TF STATE BUCKET 
# ========================================

resource "aws_s3_bucket" "tf-state" {
	bucket = "tf-state.frostzero.ca"
	lifecycle {
		prevent_destroy = true
	}
	tags = {
		project = "tf-state"
	}
}

Legacy CV/resume storage

My CVs are automatically compiled from LaTeX and dumped into a bucket for serving. This is the legacy bucket. I can't destroy it until I'm absolutely certain that nobody is referencing it.

# ========================================
# BUCKET FOR STORING CVs
# ========================================

resource "aws_s3_bucket" "cv-bucket" {
	bucket = "frostzero-cv-pdfs"

	lifecycle {
		prevent_destroy = true
	}
	tags = {
		project = "cv/resume"
	}
}

resource "aws_s3_bucket_public_access_block" "cv-bucket" {
	bucket = aws_s3_bucket.cv-bucket.id
	block_public_acls = false
	block_public_policy = false
	ignore_public_acls = false
	restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "cv-bucket-public" {
	bucket = aws_s3_bucket.cv-bucket.id
	policy = jsonencode({
		Version = "2012-10-17"
		Statement = [
			{
				Effect = "Allow"
				Principal = "*"
				Action = "s3:GetObject"
				Resource = "${aws_s3_bucket.cv-bucket.arn}/*"
			}
		]
	})
}

# ==========
# AND ASSOCIATED USER FOR UPLOADING VIA GITLAB PIPELINE
# TODO: Replace with granting pipeline permission directly
# ==========

resource "aws_iam_user" "cv-bucket-user" {
	name = "frostzero-cv-pdfs-writer"
	path = "/"
}

resource "aws_iam_access_key" "cv-bucket-user-key" {
	user = aws_iam_user.cv-bucket-user.id
}

resource "aws_iam_user_policy" "cv-bucket-put-policy" {
	user = aws_iam_user.cv-bucket-user.id
	name = "S3UploadPolicy"

	policy = jsonencode({
		Version = "2012-10-17"
		Statement = [
			{
				Effect = "Allow",
				Action = "s3:PutObject",
				Resource = "${aws_s3_bucket.cv-bucket.arn}/*",
			}
		]
	})
}

output "cv-bucket-user-credentials" {
	sensitive = true
	value = {
		access-key = aws_iam_access_key.cv-bucket-user-key.id
		secret-key = aws_iam_access_key.cv-bucket-user-key.secret
		bucket = aws_s3_bucket.cv-bucket.id
	}
}

output "cv-url" {
	value = "https://${aws_s3_bucket.cv-bucket.id}.s3.ca-central-1.amazonaws.com/oskar-lidelson.pdf"
}

Cold Backups

I back up a bunch of stuff into S3 on the Glacier tier: really old home videos and crap like that. Stuff that, honestly, nobody ever looks at.

I'm aware that if it were all to get blown up, I'd be sad for a few days and then totally forget about it. You shouldn't have attachment to old junk like that.

# ========================================
# COLD BACKUPS
# ========================================

resource "aws_s3_bucket" "cold-backups" {
	bucket = "frostzero-cold-backups"
	lifecycle {
		prevent_destroy = true		
	}
	tags = {
		project = "cold-backups"
	}
}

resource "aws_s3_bucket_lifecycle_configuration" "cold-backups-glacier-migration" {
	bucket = aws_s3_bucket.cold-backups.id

	transition_default_minimum_object_size = "varies_by_storage_class"   # optional – allows <128 KiB objects

	rule {
		id = "MovetoGlacier"
		status = "Enabled"

		filter {} # Apply to all objects

		transition {
			days = 0
			storage_class = "GLACIER"
		}
	}
}

Legacy Nextcloud Storage

My old nextcloud instance stashed everything in a bucket. This was that special bucket. I have yet to finish migrating everything over to the new one.

  • TODO Migrate everything to the new nextcloud bucket
    # ========================================
    # NEXTCLOUD STORAGE BACKEND
    # ========================================
    
    resource "aws_s3_bucket" "s3_nc_frostzero_ca" {
    	bucket = "s3.nc.frostzero.ca"
    	lifecycle {
    		prevent_destroy = true
    	}
    	tags = {
    		project = "nextcloud"
    	}
    }
    
    # Let AWS move the objects to an appropriate storage tier:
    resource "aws_s3_bucket_lifecycle_configuration" "nextcloud-glacier-migration" {
    	bucket = aws_s3_bucket.s3_nc_frostzero_ca.id
    
    	transition_default_minimum_object_size = "varies_by_storage_class"   # optional – allows <128 KiB objects
    
    	rule {
    		id = "NextCloudMoveToIntelligentTiering"
    		status = "Enabled"
    		filter {} # Apply to all objects
    		transition {
    			days = 1
    			storage_class = "INTELLIGENT_TIERING"
    		}
    	}
    }
    

Central Networking

This file configures a dual-stack VPC with a few interesting subnets:

  1. An ipv6-only subnet with NAT64 (I'm trying to migrate everything to this wherever possible)
  2. Dualstack subnets (At the moment, they are intentionally periodically broken on IPv4 routing to make it irritating as hell to use them. The ipv6-only subnets are the headache-free way. I use this 'headache method' often.)

module "dualstack-vpc-a" {
	source = "git::https://gitlab.com/sea/public/tf-modules/aws/vpc.git"
	name = "vpc-a"
	description = "terraform-created VPC for all purposes."
	availability-zones = local.availability-zones
	be_dualstack = true # Create a dualstack subnet
	create_nat64 = true # This costs way too much money but at this point, whatever.
	create_ipv6only = true # Create an ipv6-only subnet
	ipv4_cidr_block = "10.0.0.0/16"
	tags = {
		project = "network-infra"
	}
}

locals {
	dualstack-default-subnet = module.dualstack-vpc-a.o.dualstack-subnets[local.default-availability-zone].id
	dualstack-backup-subnet = module.dualstack-vpc-a.o.dualstack-subnets[local.backup-availability-zone].id
}

output "vpc" {
	value = module.dualstack-vpc-a.o
}

OpenTofu Backend Setup

This file defines the Opentofu backend configuration for S3.

I can't for the life of me remember why I called the key z0. I will have to rename it to tf-world at some point, though that will break all of the remote-state sources for everyone else..

terraform {
	backend "s3" {
		profile = "tf-frostzero"
		bucket = "tf-state.frostzero.ca"
		key = "z0.tf.state"
		region = "ca-central-1"
		encrypt = true
		dynamodb_table = "tf-state-locks"
	}		
}

Configuration - Local Vars

This file defines local variables used throughout the configuration.

locals {
	personal-email-addr = "oskar@frostzero.ca"
	ssh-key-dir = "/home/sea/.ssh" # Warning: Not portable, but nobody else runs this code but me anyway.

	# Note: subnets can only be created in ca-central-1{a,b,d}. NOT c.
	# This is not obvious except via http error code returned on trying it. 	
	availability-zones = toset(["ca-central-1a","ca-central-1b", "ca-central-1d"])
	default-availability-zone = "ca-central-1a"
	backup-availability-zone = "ca-central-1b"
}

Massive Output That Has All the Crap in It

I dump all the output together into one massive json object and mark it sensitive. I'm the only one using this anyway.

output "o" {
	sensitive = true
	value = {
		domains = null
		linode = null
		aws = null
		users = null
	}
}

Compiled on 2026-04-11 20:45:06 with GNU Emacs 30.2 (build 1, x86_64-redhat-linux-gnu, GTK+ Version 3.24.51, cairo version 1.18.4) of 2025-11-14 and Org Mode 9.7.11